
    Extended attention span training system

    Attention Deficit Disorder (ADD) is a behavioral disorder characterized by the inability to sustain attention long enough to perform activities such as schoolwork or organized play. Treatments for this disorder include medication and brainwave biofeedback training. Brainwave biofeedback training systems feed information back to the trainee, showing how well he or she is producing the brainwave pattern that indicates attention. The Extended Attention Span Training (EAST) system takes the concept a step further by making a video game more difficult as the player's brainwaves indicate that attention is waning. The trainee can succeed at the game only by maintaining an adequate level of attention. The EAST system is a modification of a biocybernetic system currently used to assess the extent to which automated flight management systems maintain pilot engagement. That biocybernetic system is a product of a program aimed at developing methods to evaluate automated flight deck designs for compatibility with human capabilities. The EAST technology can contribute to medical neuropsychology and neurology, where the emphasis is on cautious, conservative treatment of youngsters with attention disorders.
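
    The core EAST loop couples game difficulty to a running attention measure. A minimal sketch of that coupling in Python, with a hypothetical attention_level() standing in for the real EEG pipeline, since the abstract does not specify the adjustment schedule:

        import random

        def attention_level():
            # Stand-in for the EEG-derived attention measure, scaled 0..1.
            return random.random()

        difficulty = 1.0
        for step in range(10):
            a = attention_level()
            # Waning attention makes the game harder; sustained attention
            # relaxes it, so the trainee succeeds only by staying attentive.
            if a < 0.5:
                difficulty = min(10.0, difficulty * 1.10)
            else:
                difficulty = max(1.0, difficulty * 0.95)
            print(f"step {step}: attention={a:.2f} difficulty={difficulty:.2f}")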

    Method of encouraging attention by correlating video game difficulty with attention level

    A method of encouraging attention in persons such as those suffering from Attention Deficit Disorder is provided by correlating the level of difficulty of a video game with the level of attention in a subject. A conventional video game comprises a video display, which depicts objects for interaction with a player, and a difficulty adjuster, which increases the difficulty level, e.g., the action speed and/or evasiveness of the depicted objects, in a predetermined manner. The electrical activity of the brain is measured at selected sites to determine levels of awareness, e.g., activity in the beta, theta, and alpha bands. A value indicative of the level of awareness is generated from this measured electrical signal. The difficulty level of the game is increased as this awareness value decreases and is decreased as it increases.
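
    The mapping from measured brain activity to difficulty can be sketched as below. The beta/(alpha+theta) ratio is one common engagement index and is an assumption here; the abstract says only that a value is derived from activity in the beta, theta, and alpha bands:

        def awareness_value(beta, alpha, theta):
            # Assumed formulation: more beta power relative to alpha+theta
            # reads as higher awareness. The abstract does not fix a formula.
            return beta / (alpha + theta)

        def difficulty_level(awareness, base=5.0, gain=4.0):
            # Per the claim: difficulty rises as the awareness value falls
            # and falls as it rises. The 1..10 scale is assumed.
            return max(1.0, min(10.0, base + gain * (1.0 - awareness)))

        print(difficulty_level(awareness_value(beta=0.8, alpha=0.5, theta=0.4)))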

    Graded structure in sexual definitions: categorizations of having “had sex” and virginity loss among homosexual and heterosexual men and women

    Definitions of sexual behavior display a robust hierarchy of agreement regarding whether or not acts should be classed as, for example, sex or virginity loss. The current research offers a theoretical explanation for this hierarchy, proposing that sexual definitions display graded categorical structure arising from goodness-of-membership judgments. Moderation of this graded structure is also predicted, with the focus here on how sexual orientation identity affects sexual definitions. A total of 300 participants aged 18 to 30 completed an online survey, rating 18 behaviors for the extent to which each constitutes having “had sex” and virginity loss. Participants fell into one of four groups: heterosexual men, heterosexual women, gay men, or lesbians. The predicted ratings hierarchy emerged, in which bidirectional genital acts were rated significantly higher than unidirectional or nonpenetrative contact, which was in turn rated significantly higher than acts involving no genital contact. Moderation of graded structure was also in line with predictions. Compared to the other groups, the lesbian group significantly upgraded ratings of genital contact that was either unidirectional or nonpenetrative. There was also evidence of upgrading of anal intercourse ratings by the gay male sample. These effects are theorized to reflect group-level variation in experience, contextual perspective, and identity management. The implications of the findings in relation to previous research are discussed. It is suggested that a graded structure approach can greatly benefit future research into sexual definitions by permitting variable definitions to be predicted and explained, rather than merely identified.

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg² field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg² with δ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320–1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg² region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world.
    Comment: 57 pages, 32 color figures; version with high-resolution figures available from https://www.lsst.org/overvie
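
    The quoted single-visit and coadded depths are consistent under a simple stacking argument: N equal-depth visits deepen the 5σ limit by about 2.5·log10(√N) magnitudes. A quick check in Python, where the per-band visit count is an illustrative assumption (the abstract gives only ~800 visits summed over all six bands):

        import math

        single_visit_r = 24.5   # 5-sigma point-source depth in r (AB), per abstract
        visits_r = 184          # assumed r-band share of the ~800 total visits

        gain = 2.5 * math.log10(math.sqrt(visits_r))   # = 1.25 * log10(N)
        print(f"coadded r depth ~ {single_visit_r + gain:.1f}")  # ~27.3, near the quoted r ~ 27.5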

    Method for Visually Integrating Multiple Data Acquisition Technologies for Real Time and Retrospective Analysis

    A system for display of multiple physiological measurements on a single video display terminal is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive the data, calculate data products such as an index of engagement and heart rate, and display the data simultaneously in a graphical format on a single video display terminal. In addition, live video showing the subject and the experimental setup may be integrated into the single data display. The display may be recorded on a standard videotape recorder for retrospective analysis.
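
    A sketch of that integration loop: each instrument is polled as a source, derived products are computed centrally, and everything is rendered into one frame per display refresh. The source functions are hypothetical stand-ins; the paper does not name its instrument interfaces:

        import random, time

        sources = {
            "engagement_index": lambda: random.random(),
            "heart_rate_bpm": lambda: 60 + 40 * random.random(),
        }

        def render(frame):
            # Stand-in for the single graphical video display terminal; the
            # same frame could be written to tape for retrospective analysis.
            print(" | ".join(f"{k}={v:.2f}" for k, v in frame.items()))

        for _ in range(3):   # one pass per display refresh
            render({name: read() for name, read in sources.items()})
            time.sleep(0.1)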

    Cost Risk Analysis Based on Perception of the Engineering Process

    In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single value. A cost risk analysis generates a range of costs for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project, as well as a good estimate of the expected variance of that cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is then adjusted using the perceived technical and economic differences between the two projects. This allows errors from at least three sources: the historical cost data may be in error by some unknown amount; the manager's evaluation of the new project and its similarity to the old project may be in error; and the factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are also based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible costs. The usual problem is not just to produce an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to account for the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method based on parametric estimates of the technical factors involved in the project being costed. The engineering process parameters are elicited from the engineer/expert on the project and are based on that expert's technical knowledge; a parametric cost model converts them into a cost estimate. The method makes no assumptions about the distribution underlying the possible costs and is not tied to the analysis of previous projects, except through the expert calibrations performed by the parametric cost analyst.
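
    The distribution-free character of the method can be sketched as follows: parameter ranges are elicited from the expert, sampled, and pushed through a parametric cost model, and the risk curve is read off the resulting empirical distribution rather than an assumed one. The model and parameter names are illustrative, not LaRC's actual model, and sampling inputs uniformly within the elicited bounds is itself a simplifying choice:

        import random

        def parametric_cost(mass_kg, complexity):
            # Toy stand-in for the parametric cost model, returning $K.
            return 2.0 * mass_kg ** 0.8 * complexity

        elicited = {"mass_kg": (100.0, 180.0), "complexity": (1.0, 2.5)}  # expert ranges

        costs = sorted(
            parametric_cost(random.uniform(*elicited["mass_kg"]),
                            random.uniform(*elicited["complexity"]))
            for _ in range(10_000)
        )

        # Empirical risk curve: cost level not exceeded at each confidence.
        for p in (0.50, 0.80, 0.95):
            print(f"P(cost <= {costs[int(p * len(costs)) - 1]:.0f} $K) = {p:.0%}")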